
    An automatic method for extracting citations from Google Books

    Recent studies have shown that counting citations from books can help scholarly impact assessment and that Google Books (GB) is a useful source of such citation counts, despite its lack of a public citation index. Searching GB for citations produces approximate matches, however, so its raw results need time-consuming human filtering. In response, this article introduces a method to automatically remove false and irrelevant matches from GB citation searches, in addition to refining a previous manual GB citation extraction method. The method was evaluated by manually checking sampled GB results and by comparing citations to about 14,500 monographs in the Thomson Reuters Book Citation Index (BKCI) against automatically extracted GB citations across 24 subject areas. GB citations were 103% to 137% as numerous as BKCI citations in the humanities, except for tourism (72%) and linguistics (91%), 46% to 85% in the social sciences, but only 8% to 53% in the sciences. In all cases, however, GB had substantially more citing books than did BKCI, whose results came predominantly from journal articles. The moderate correlations between GB and BKCI citation counts in the social sciences and humanities suggest that the two sources may measure different aspects of impact.
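
    To make the filtering step concrete, here is a minimal sketch of the kind of automatic post-filter the abstract describes: discarding search hits whose text does not closely match the cited work. The hit snippets, titles, threshold, and the use of difflib's SequenceMatcher are illustrative assumptions, not the article's actual matching rules.

```python
# A minimal sketch of an automatic post-filter for Google Books search hits.
# The snippets, titles and matcher choice are illustrative assumptions, not
# the article's actual rules.
from difflib import SequenceMatcher

def is_genuine_citation(hit_text: str, cited_title: str, threshold: float = 0.9) -> bool:
    """True if some window of the hit text closely matches the cited title."""
    text, title = hit_text.lower(), cited_title.lower()
    n = len(title)
    for start in range(max(1, len(text) - n + 1)):
        # Compare each title-sized window of the hit against the cited title.
        if SequenceMatcher(None, text[start:start + n], title).ratio() >= threshold:
            return True
    return False

hits = [
    "... as argued in Introduction to Webometrics, link counts ...",  # genuine
    "... an introduction to wine metrics for sommeliers ...",         # near-miss
]
kept = [h for h in hits if is_genuine_citation(h, "Introduction to Webometrics")]
print(len(kept))  # 1: the approximate (false) match is removed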

    Figshare: A universal repository for academic resource sharing?

    Purpose: A number of subject-orientated and general websites have emerged to host academic resources. It is important to evaluate the uptake of such services in order to decide which depositing strategies are effective and should be encouraged. Design/methodology/approach: This article evaluates the views and shares of resources in the generic repository Figshare by subject category and resource type. Findings: Figshare use and common resource types vary substantially by subject category, but resources can be highly viewed even in subjects with few members. Subject areas with more deposited resources do not tend to have higher viewing or sharing statistics. Practical implications: Limited uptake of Figshare within a subject area should not be a barrier to its use. Several highly successful innovative uses of Figshare show that it can reach beyond a purely academic audience. Originality/value: This is the first analysis of the uptake and use of a generic academic resource-sharing repository.

    Web indicators for research evaluation. Part 1: Citations and links to academic articles from the Web

    The extensive use of the web by many sectors of society has created the potential for new, wider impact indicators. This article reviews research about Google Scholar and Google Patents, both of which can be used as sources of impact indicators for academic articles. It also briefly reviews methods to extract types of links and citations from the web as a whole, although the indicators that these generate are now probably too broad and too dominated by automatically generated websites, such as library and publisher catalogues, to be useful in practice. More valuable web-based indicators can be derived from specific types of web pages that cite academic research, such as online presentations, course syllabi, and science blogs. These provide evidence that is easier to understand and use and less likely to be affected by unwanted types of automatically generated content, although they are susceptible to gaming.

    News stories as evidence for research? BBC citations from articles, books and Wikipedia

    This is an accepted manuscript of an article published by John Wiley & Sons in Journal of the Association for Information Science and Technology on 17/07/2017, available online: https://doi.org/10.1002/asi.23862 The accepted version of the publication may differ from the final published version.
    Although news stories target the general public and are sometimes inaccurate, they can serve as sources of real-world information for researchers. This article investigates the extent to which academics exploit journalism, using content and citation analyses of online BBC News stories cited by Scopus articles. A total of 27,234 Scopus-indexed publications have cited at least one BBC News story, with a steady annual increase. Citing publications were more common in the arts and humanities (2.8% of publications in 2015) and social sciences (1.5%) than in medicine (0.1%) and science (<0.1%). Surprisingly, half of the sampled Scopus-cited science and technology (53%) and medicine and health (47%) stories were based on academic research rather than otherwise unpublished information, suggesting that researchers had chosen a lower-quality secondary source for their citations. Nevertheless, the BBC News stories most frequently cited by Scopus, Google Books and Wikipedia introduced new information on many different topics, including politics, business, economics, statistics, and reports about events. Thus, news stories are mediating real-world knowledge into the academic domain, a potential cause for concern.

    Do journal data sharing mandates work? Life sciences evidence from Dryad

    Purpose: Data sharing is widely thought to help research quality and efficiency. Since data sharing mandates are increasingly adopted by journals, this paper assesses whether they work. Design/methodology: This study examines two evolutionary biology journals, Evolution and Heredity, that have data sharing mandates and make extensive use of Dryad. It uses a quantitative analysis of presence in Dryad, downloads, and citations. Findings: Within both journals, data sharing seems to be complete, showing that the mandates work on a technical level. Low correlations (0.15-0.18) between data downloads and article citation counts for articles published in 2012 in these journals indicate a weak relationship between data sharing and research impact. An average of 40-55 data downloads per article after a few years suggests that some use is found for shared life sciences data. Research limitations: The value of these uses of shared data is unclear. Practical implications: Data sharing mandates should be encouraged as an effective strategy. Originality/value: This is the first analysis of the effectiveness of data sharing mandates.
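
    As a note on method: correlations like the 0.15-0.18 reported above are straightforward to compute once per-article download and citation counts have been paired. The sketch below uses Spearman's rank correlation, a common default for skewed bibliometric counts; the abstract does not say which coefficient the study used, and the numbers are made-up placeholders, not the study's data.

```python
# Illustrative only: a download-citation correlation of the kind reported
# above. The counts are made-up placeholders, not the study's data, and
# Spearman is an assumed (common) choice of coefficient.
from scipy.stats import spearmanr

downloads = [12, 55, 40, 3, 81, 27, 66, 9]   # Dryad data downloads per article
citations = [4, 10, 2, 1, 15, 6, 3, 0]       # citations to the same articles

rho, p = spearmanr(downloads, citations)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")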

    Web indicators for research evaluation. Part 2: Social media metrics

    This literature review assesses indicators derived from social media sources, including both general and academic sites. Such indicators have been termed altmetrics, influmetrics, social media metrics, or a type of webometric, and have recently been commercialised by a number of companies and employed by some publishers and university administrators. The social media metrics analysed here derive mainly from Twitter, Facebook, Google+, F1000, Mendeley, ResearchGate, and Academia.edu. They have the apparent potential to deliver fast, free indicators of the wider societal impact of research, or of different types of academic impacts, complementing academic impact indicators from traditional citation indexes. Although it is unwise to employ them in formal evaluations with stakeholders, due to their susceptibility to gaming and the lack of real evidence that they reflect wider research impacts, they are useful for formative evaluations and to investigate science itself. Mendeley reader counts are particularly promising.

    Web indicators for research evaluation. Part 3: Books and non-standard outputs

    This literature review describes web indicators for the impact of books, software, datasets, videos and other non-standard academic outputs. Although journal articles dominate academic research in the health and natural sciences, other types of outputs can make equally valuable contributions to scholarship and are more common in other fields. It is not always possible to get useful citation-based impact indicators for these due to their absence from, or incomplete coverage in, traditional citation indexes. In this context, the web is particularly valuable as a potential source of impact indicators for non-standard academic outputs. The main focus of this review is on books, because of the much greater amount of relevant research for them and because they are regarded as particularly valuable in the arts and humanities and in some areas of the social sciences.

    Can Microsoft Academic assess the early citation impact of in-press articles? A multi-discipline exploratory analysis

    Many journals post accepted articles online before they are formally published in an issue. Early citation impact evidence for these articles could be helpful for timely research evaluation and for identifying potentially important articles that quickly attract many citations. This article investigates whether Microsoft Academic can help with this task. For over 65,000 Scopus in-press articles from 2016 and 2017 across 26 fields, Microsoft Academic found 2-5 times as many citations as Scopus, depending on year and field. Manual checks of 1,122 Microsoft Academic citations not found in Scopus showed that Microsoft Academic's citation indexing was faster, but not much wider, than Scopus for journals. It achieved this by associating citations to preprints with their subsequent in-press versions and by extracting citations from in-press articles. In some fields its coverage of scholarly digital libraries, such as arXiv.org, was also an advantage. Thus, Microsoft Academic seems to be a more comprehensive automatic source of citation counts for in-press articles than Scopus.
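
    For intuition, the source comparison above reduces to a join once per-article citation counts have been retrieved from each database and keyed by a shared identifier such as the DOI. The sketch below is not the paper's pipeline; the DOIs and counts are hypothetical placeholders.

```python
# A minimal sketch, not the paper's pipeline: comparing citation counts for
# the same articles across two sources via a DOI join. All DOIs and counts
# below are hypothetical placeholders.
scopus_citations = {"10.1000/a": 1, "10.1000/b": 0, "10.1000/c": 2}
ma_citations = {"10.1000/a": 4, "10.1000/b": 1, "10.1000/c": 5}

common_dois = scopus_citations.keys() & ma_citations.keys()
scopus_total = sum(scopus_citations[d] for d in common_dois)
ma_total = sum(ma_citations[d] for d in common_dois)

ratio = ma_total / scopus_total if scopus_total else float("nan")
print(f"{len(common_dois)} matched articles; MA/Scopus ratio = {ratio:.1f}")
# With these toy counts the ratio is 3.3, echoing the 2-5x range reported above.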

    Online impact – surveying the websites most commonly cited in impact case studies

    Reporting on their recent survey of websites cited in REF 2014 impact case studies, Kayvan Kousha, Mike Thelwall and Mahshid Abdoli discuss which websites are most commonly used as supporting evidence for impact and how these vary across academic disciplines.

    Do Mendeley reader counts reflect the scholarly impact of conference papers? An investigation of computer science and engineering

    This is an accepted manuscript of an article published by Springer in Scientometrics on 13/04/2017, available online: https://doi.org/10.1007/s11192-017-2367-1 The accepted version of the publication may differ from the final published version.
    Counts of Mendeley readers may give useful evidence about the impact of published research. Although previous studies have found significant positive correlations between counts of Mendeley readers and citation counts for journal articles, it is not known if this is equally true for conference papers. To fill this gap, Mendeley readership data and Scopus citation counts were extracted for both journal articles and conference papers published in 2011 in four fields for which conferences are important: Computer Science Applications; Computer Software; Building & Construction Engineering; and Industrial & Manufacturing Engineering. Mendeley readership counts correlated moderately with citation counts for both journal articles and conference papers in Computer Science Applications and Computer Software. The correlations between Mendeley readers and citation counts were much lower for conference papers than for journal articles in Building & Construction Engineering and Industrial & Manufacturing Engineering. Hence, there seem to be disciplinary differences in the usefulness of Mendeley readership counts as impact indicators for conference papers, even between fields for which conferences are important.